1 Abstract

Blabla.

2 Introduction

Computer vision is a field of artificial intelligence in which a machine is taught how to extract and interpret the content of an image (Krizhevsky, Sutskever, and Hinton 2012). Computer vision relies on deep learning that allows computational models to learn from training data – a set of manually labelled images – and make predictions on new data – a set of unlabelled images (Baraniuk, Donoho, and Gavish 2020; LeCun, Bengio, and Hinton 2015). With the growing availability of massive data, computer vision with deep learning is being increasingly used to perform important tasks such as object detection, face recognition, action and activity recognition or human pose estimation in fields as diverse as medicine, robotics, transportation, genomics, sports and agriculture (Voulodimos et al. 2018).

In ecology in particular, there is growing interest in deep learning for automating repetitive analyses of large numbers of images, such as identifying plant and animal species, distinguishing individuals of the same or different species, counting individuals, or detecting relevant features (Christin, Hervet, and Lecomte 2019; Lamba et al. 2019; Weinstein 2018). By saving hours of manual data analysis and tapping into the massive amounts of data that keep accumulating with technological advances, deep learning has the potential to become an essential tool for ecologists and applied statisticians.

Despite the promising future of computer vision and deep learning, several challenges stand in the way of their wide adoption by the community of ecologists (Wearn, Freeman, and Jacoby 2019). First, there is a programming barrier: most, if not all, algorithms are written in the Python language, while most ecologists are better versed in R (Lai et al. 2019). If ecologists are to use computer vision routinely, bridges are needed between these two languages (through, e.g., the reticulate package, Allaire et al. 2017, or the shiny package, Tabak et al. 2020). Second, most recent applications of computer vision via deep learning in ecology have focused on computational aspects and simple tasks without addressing the underlying ecological questions (Sutherland et al. 2013) or carrying out the statistical analysis of the data (Gimenez et al. 2014). Although perfectly understandable given the challenges at hand, we argue that a better integration of the why (ecological questions), the what (data) and the how (statistics) would be beneficial to computer vision for ecology (see also Weinstein 2018).

Here, we showcase a full why-what-how workflow in R, using a case study on elucidating the structure of an ecological community (a set of co-occurring species), namely the Eurasian lynx (Lynx lynx) and its main prey. First, we introduce the case study and motivate the need for deep learning. Second, we illustrate deep learning for the identification of animal species in large numbers of images, including model training and validation with a dataset of labelled images, and prediction with a new dataset of unlabelled images. Last, we proceed with the quantification of spatial co-occurrence using statistical models. We hope that our reproducible workflow will be useful to ecologists and applied statisticians.

3 Collecting images with camera traps

Lynx (Lynx lynx) went extinct in France at the end of the 19th century due to habitat degradation, human persecution and a decrease in prey availability (Vandel and Stahl 2005). The species was reintroduced in Switzerland in the 1970s (Breitenmoser 1998), then recolonised France through the Jura mountains in the 1980s (Vandel and Stahl 2005). The species is listed as endangered under the 2017 IUCN Red List and is of conservation concern in France due to habitat fragmentation, poaching and collisions with vehicles. The Jura mountains hold the bulk of the French lynx population.

To better understand its distribution, we need to quantify its interactions with its main prey, roe deer (Capreolus capreolus) and chamois (Rupicapra rupicapra) (Molinari-Jobin et al. 2007), two ungulate species that are also hunted. To assess the relative contributions of predation and hunting, a predator-prey program was set up jointly by the French Office for Biodiversity, the hunting federations of the Jura, Ain and Haute-Savoie counties, and the French National Centre for Scientific Research.

Animal detections were made using camera traps deployed in the Jura and Ain counties in the Jura mountains (see Figure 1). We divided the two study areas into grids of 2.7 \(\times\) 2.7 km cells, or sites hereafter (Zimmermann et al. 2013), and set two camera traps per site (Xenon white flash with passive infrared trigger mechanisms; models Capture, Ambush and Attack; Cuddeback), with 18 sites in the Jura study area and 11 in the Ain study area active over the study period (February 2016 to October 2017 for the Jura county, and February 2017 to May 2019 for the Ain county). Camera traps were checked weekly to change memory cards and batteries, and to remove fresh snow after heavy snowfall.

**Figure 1**: Study area, grid and camera trap locations.

In total, 45563 and 18044 pictures were considered in the Jura and Ain sites respectively, after manually dropping empty pictures and pictures with unidentified species. We identified the species present in all images by hand (see Table 1) using digiKam, a free and open-source digital photo management application (https://www.digikam.org/). This operation took several weeks of full-time work, which is often identified as a limitation of camera trap studies. Computer vision with deep learning has been identified as a promising approach to expedite this tedious task (Norouzzadeh et al. 2021; Tabak et al. 2019; Willi et al. 2019). Labelled images for the Ain county are available from Zenodo link, and those for the Jura county from Zenodo link.

Table 1: Species identified in the Jura and Ain study sites with sample sizes (n). Only the 10 species with the most images are shown.

| Species in Jura study site | n | Species in Ain study site | n |
|---|---|---|---|
| human | 31644 | human | 4946 |
| vehicle | 5637 | vehicle | 4454 |
| dog | 2779 | dog | 2310 |
| fox | 2088 | fox | 1587 |
| chamois | 919 | rider | 1025 |
| wild boar | 522 | roe deer | 860 |
| badger | 401 | chamois | 780 |
| roe deer | 368 | hunter | 593 |
| cat | 343 | wild boar | 514 |
| lynx | 302 | badger | 461 |

4 Deep learning for species identification

From the images obtained with camera traps (Table 1), we trained a model to identify species, using the Jura study site as a calibration dataset. We then assessed the model's ability to automatically identify species in a new dataset (its transferability), using the Ain study site as an evaluation dataset.

4.1 Training - Jura study site

We selected at random 80% of the annotated images for each species in the Jura study site for training, and the remaining 20% for testing. We applied various transformations (flipping, brightness and contrast modifications; Shorten and Khoshgoftaar 2019) to improve training (see Appendix). To reduce model training time and compensate for the small number of images, we used transfer learning (Yosinski et al. 2014; Shao, Zhu, and Li 2015), that is, a pre-trained model as a starting point for training our species identification model. Specifically, we trained a deep convolutional neural network with the ResNet-50 architecture (He et al. 2016) using the fastai library (https://docs.fast.ai/), which builds on the PyTorch library (Paszke et al. 2019). Conveniently, the fastai library comes with an R interface (https://eagerai.github.io/fastai/) that uses the reticulate package to communicate with Python, allowing R users to access up-to-date Python deep learning tools. We trained models on the Montpellier Bioinformatics Biodiversity platform using a GPU machine (NVIDIA Titan Xp) with 16 GB of RAM. Training ran for 20 epochs and took approximately 10 hours. This computational burden prevented us from providing a fully reproducible analysis, but we do so with a subsample of the dataset in the Appendix. All trained models are available from https://doi.org/10.5281/zenodo.5164796.
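The per-species (stratified) 80/20 split can be sketched as follows. This is a minimal illustration in plain Python with made-up labels, not the fastai data-loading code we actually used:

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.8, seed=42):
    """Split image indices into train/test sets, stratified by species,
    so each species contributes ~train_frac of its images to training."""
    rng = random.Random(seed)
    by_species = defaultdict(list)
    for idx, species in enumerate(labels):
        by_species[species].append(idx)
    train, test = [], []
    for species, idxs in by_species.items():
        rng.shuffle(idxs)
        n_train = int(round(train_frac * len(idxs)))
        train.extend(idxs[:n_train])
        test.extend(idxs[n_train:])
    return train, test

# Toy example: 10 'lynx' and 5 'fox' images
labels = ["lynx"] * 10 + ["fox"] * 5
train, test = stratified_split(labels)
# train holds 8 lynx + 4 fox indices; test holds the remaining 3
```

Stratifying by species matters here because class sizes are highly unbalanced (Table 1): a plain random split could leave rare species nearly absent from the test set.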

We calculated three metrics to evaluate our model's performance at correctly identifying species (e.g. Duggan et al. 2021): accuracy, the ratio of correct predictions to the total number of predictions; recall, a measure of false negatives (FN; e.g. an image with a lynx for which our model predicts another species), with recall = TP / (TP + FN), where TP denotes true positives; and precision, a measure of false positives (FP; e.g. an image without a lynx for which our model predicts a lynx), with precision = TP / (TP + FP). In camera trap studies, a common strategy is to optimize precision when the focus is on rare species, and recall when the focus is on common species (Duggan et al. 2021).
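For a single species treated one-vs-rest, these metrics reduce to simple counts. A minimal Python sketch (the counts below are illustrative, chosen to match the lynx row of Table 2):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true positive (TP), false positive (FP)
    and false negative (FN) counts for one species (one-vs-rest)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# E.g. 95 lynx images correctly classified, 14 non-lynx images
# classified as lynx, 5 lynx images classified as something else:
p, r = precision_recall(tp=95, fp=14, fn=5)
# p ~ 0.87, r = 0.95 (cf. the lynx row of Table 2)
```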

We achieved 85% accuracy during training. Our model performed well for the three classes we were interested in, with 87% precision for lynx and 81% recall for both roe deer and chamois (Table 2).

Table 2: Model training performance. Images from the Jura study site were used for training.

| Species | Precision | Recall |
|---|---|---|
| badger | 0.78 | 0.88 |
| deer | 0.67 | 0.21 |
| chamois | 0.86 | 0.81 |
| cat | 0.89 | 0.78 |
| roe deer | 0.67 | 0.81 |
| dog | 0.78 | 0.84 |
| human | 0.99 | 0.79 |
| hare | 0.32 | 0.52 |
| lynx | 0.87 | 0.95 |
| fox | 0.85 | 0.90 |
| wild boar | 0.93 | 0.88 |
| vehicle | 0.95 | 0.98 |

4.2 Transferability - Ain study site

We evaluated the transferability of our trained model by predicting species on images from the Ain study site, which were not used for training. Precision was 0.77 for lynx, and while we achieved 0.86 recall for roe deer, our model performed poorly for chamois, with 0.08 recall (Table 3). To better understand this pattern, we display the results as a confusion matrix that compares model classifications to manual classifications (Figure 2). There were many false negatives for chamois: when a chamois was actually present in an image, it was often classified as another species by our model. Overall, our model trained on images from the Jura study site did poorly at correctly predicting species on images from the Ain study site. This result does not come as a surprise, as generalizing classification algorithms to new environments is known to be difficult (Beery, Horn, and Perona 2018). While a computer scientist would be disappointed in these results, an ecologist will wonder whether ecological inference on the interactions between lynx and its prey is biased by these average performances, a question we address in the next section.
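A confusion matrix such as the one in Figure 2 is just a tally of (manual label, predicted label) pairs. A minimal Python sketch with made-up labels:

```python
from collections import Counter

def confusion_matrix(true_labels, predicted_labels):
    """Count (true, predicted) label pairs; the true label plays the
    role of the row, the model prediction the role of the column."""
    return Counter(zip(true_labels, predicted_labels))

# Toy example: one chamois misclassified as roe deer
truth = ["chamois", "chamois", "lynx", "roe deer"]
preds = ["roe deer", "chamois", "lynx", "roe deer"]
cm = confusion_matrix(truth, preds)
# cm[("chamois", "roe deer")] == 1  (a false negative for chamois)
```

Row sums of such a matrix give TP + FN per species (denominator of recall) and column sums give TP + FP (denominator of precision).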

Table 3: Model transferability performance. Images from the Ain study site were used for assessing transferability.

| Species | Precision | Recall |
|---|---|---|
| badger | 0.71 | 0.89 |
| rider | 0.79 | 0.92 |
| deer | 0.00 | 0.00 |
| chamois | 0.82 | 0.08 |
| hunter | 0.17 | 0.11 |
| cat | 0.46 | 0.59 |
| roe deer | 0.67 | 0.86 |
| dog | 0.77 | 0.35 |
| human | 0.51 | 0.93 |
| hare | 0.37 | 0.35 |
| lynx | 0.77 | 0.89 |
| marten | 0.05 | 0.04 |
| fox | 0.90 | 0.53 |
| wild boar | 0.75 | 0.94 |
| cow | 0.01 | 0.25 |
| vehicle | 0.94 | 0.51 |

Figure 2: Confusion matrix comparing automatic to manual species classifications. Species that were predicted by our model are in columns, and species that are actually in the images are in rows.

5 Spatial co-occurrence

To empirically evaluate the potential bias in ecological inference introduced by automatic labelling, we compared two datasets: a dataset pooling the manually labelled images from both the Jura and Ain study sites, which we treat as error-free (ground truth dataset), and a dataset pooling the manually labelled images from the Jura study site with the images from the Ain study site that were automatically labelled by our trained model (classified dataset).

We formatted the data by generating monthly detection histories, that is, sequences of detections (\(Y_{sit} = 1\)) and non-detections (\(Y_{sit} = 0\)) for species \(s\) at site \(i\) and sampling occasion \(t\) (see Figure 3).
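Building such detection histories amounts to collapsing all detections of a species at a site within a sampling occasion (here, a month) into a single 1/0. A minimal Python sketch with hypothetical records (our actual workflow does this in R):

```python
def detection_history(records, sites, occasions):
    """records: iterable of (species, site, occasion) detection events.
    Returns {species: {(site, occasion): 0 or 1}}, where repeated
    detections within an occasion collapse to a single 1."""
    detected = set(records)  # deduplicate repeated detections
    species_list = {sp for sp, _, _ in detected}
    return {
        sp: {(i, t): int((sp, i, t) in detected)
             for i in sites for t in occasions}
        for sp in species_list
    }

# Two lynx pictures at site A in month 1, one chamois at site B in month 2
records = [("lynx", "A", 1), ("lynx", "A", 1), ("chamois", "B", 2)]
Y = detection_history(records, sites=["A", "B"], occasions=[1, 2])
# Y["lynx"][("A", 1)] == 1 despite two pictures; Y["lynx"][("B", 2)] == 0
```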

**Figure 3**: Detections (black) and non-detections (light grey) for each of the 3 species lynx, chamois and roe deer. Sites are on the Y axis, while sampling occasions are on the X axis. Only data from the ground truth dataset are displayed.

To quantify spatial co-occurrence between lynx and its prey, we used a multispecies occupancy modelling approach (Rota et al. 2016) implemented in the R package unmarked (Fiske and Chandler 2011). The multispecies occupancy model assumes that observations \(Y_{sit}\), conditional on the latent occupancy state \(Z_{si}\) of species \(s\) at site \(i\), are drawn from Bernoulli random variables, \(Y_{sit} | Z_{si} \sim \mbox{Bernoulli}(Z_{si}p_{sit})\), where \(p_{sit}\) is the detection probability of species \(s\) at site \(i\) and sampling occasion \(t\). Detection probabilities can be modelled as functions of site and/or sampling covariates, or of the presence or absence of other species, but for the sake of illustration we make them species-specific only.

The latent occupancy states are assumed to follow a multivariate Bernoulli distribution (Dai, Ding, and Wahba 2013). Consider two species, species 1 and 2; then \(Z_i = (Z_{i1}, Z_{i2}) \sim \mbox{multivariate Bernoulli}(\psi_{11}, \psi_{10}, \psi_{01}, \psi_{00})\), where \(\psi_{11}\) is the probability that a site is occupied by both species 1 and 2, \(\psi_{10}\) the probability that a site is occupied by species 1 but not 2, \(\psi_{01}\) the probability that a site is occupied by species 2 but not 1, and \(\psi_{00}\) the probability that a site is occupied by neither species. Note that we considered occupancy probabilities that are species-specific only, but these could also be modelled as functions of site-specific covariates. Marginal occupancy probabilities are simply obtained as \(\Pr(Z_{i1}=1) = \psi_{11} + \psi_{10}\) and \(\Pr(Z_{i2}=1) = \psi_{11} + \psi_{01}\). With this model, we may also infer potential interactions by calculating conditional probabilities, for example the probability that a site is occupied by species 2 given that it is occupied by species 1, \(\Pr(Z_{i2} = 1| Z_{i1} = 1) = \displaystyle{\frac{\psi_{11}}{\psi_{11}+\psi_{10}}}\).
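The marginal and conditional probabilities above are simple functions of the four \(\psi\) parameters. A Python sketch with illustrative (not estimated) values:

```python
def occupancy_summaries(psi11, psi10, psi01, psi00):
    """Marginal and conditional occupancy probabilities for a
    two-species multivariate Bernoulli occupancy model."""
    assert abs(psi11 + psi10 + psi01 + psi00 - 1.0) < 1e-9  # must sum to 1
    marg1 = psi11 + psi10          # Pr(Z1 = 1), species 1 present
    marg2 = psi11 + psi01          # Pr(Z2 = 1), species 2 present
    cond2_given1 = psi11 / marg1   # Pr(Z2 = 1 | Z1 = 1)
    return marg1, marg2, cond2_given1

# Illustrative values only:
m1, m2, c = occupancy_summaries(psi11=0.4, psi10=0.1, psi01=0.3, psi00=0.2)
# m1 = 0.5, m2 = 0.7, c = 0.8
```

In practice, unmarked estimates the \(\psi\) parameters (on the link scale) and such summaries are computed from the fitted model.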

Marginal occupancy probabilities are shown in Figure 4, and lynx occupancy probabilities conditional on the presence or absence of its prey in Figure 5. There is a small bias in the estimated occupancy probability of lynx given the presence of both its favorite prey when we rely on automatic labelling of the images. Because the difference in estimates is small, an ecologist may well decide to ignore this bias in view of the time saved relative to labelling the images manually. The bias is larger, however, for the occupancy probability of lynx given roe deer presence and chamois absence, which is clearly underestimated.

**Figure 4**: Marginal occupancy probabilities for all three species (lynx, roe deer and chamois). Parameter estimates are from a multispecies occupancy model using either the ground truth dataset (in red) or the classified dataset (in blue-grey).

**Figure 5**: Lynx occupancy probability conditional on the presence or absence of its prey (roe deer and chamois). Parameter estimates are from a multispecies occupancy model using either the ground truth dataset (in red) or the classified dataset (in blue-grey).

6 Discussion

Tabak et al. (2019) reported, to their knowledge, the highest accuracy to date (97.6%) in using machine learning to classify wildlife in camera trap images (a previous study achieved 95% accuracy; Norouzzadeh et al. 2018). Their model is especially useful for researchers studying the species and groups available in their package in North America, as it performed well (82% accuracy) in classifying ungulates in an out-of-sample test of images from Canada.

The ability to rapidly identify millions of images from camera traps can fundamentally change the way ecologists design and implement wildlife studies. The burden of classifying images from camera traps has led ecologists to limit the duration and size of camera trap studies (Kelly et al., 2008; Scott et al., 2018). By removing this burden, camera traps can be applied in more studies including monitoring invasive or sensitive species, long-term ecological research and small-scale occupancy studies.

In conclusion, using a model trained on one site to predict on another site is delicate. It is easy to get lost in the maze of deep learning, but the ecological question should remain the compass: average algorithmic performance can be acceptable if the resulting bias in ecological indicators is small. We can nevertheless do better, and we are currently developing species distribution models that would account both for species interactions and for false positives and false negatives. To go further with deep learning and image analysis, we refer the reader to Miele, Dray, and Gimenez (2021).

Ongoing work to include covariates.

The lynx (Lynx lynx, Linné 1758) has been present again in the Jura mountains since the 1980s. To sustain its return to a functional ecosystem, we need to understand the factors that affect the distribution of lynx and of its prey, the roe deer (Capreolus capreolus, Linné 1758) and the chamois (Rupicapra rupicapra, Linné 1758). How do environmental variables (forest cover, human disturbance) affect the presence of lynx and its prey, and their co-occurrence, in the French Jura? What is the relative contribution of habitat preferences and predator-prey relationships? To answer these questions, we used the multispecies occupancy model developed by Rota et al. (2016), which accounts for imperfect species detection. With this model, we quantified lynx presence as a function of environmental variables and of the presence or absence of its prey, using data from a non-invasive camera trap monitoring protocol. We show that the presence of lynx and its prey is influenced both by forest cover and by the presence of the other species (lynx, chamois or roe deer). Therefore, we need to account for interactions between species, together with habitat quality, when inferring the occupancy of these species.

For much of the last century, ecologists have typically interpreted the diversity and composition of communities as the outcome of local-scale processes, both biotic (e.g. competition and predation) and abiotic (e.g. temperature and nutrients).

Some of the most challenging questions in ecology concern communities: sets of co-occurring species.

Tabak et al. (2020), presenting MLWIC2 (Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images), note that when using machine learning model output to design occupancy and abundance models, we can incorporate the accuracy estimates generated during model testing. The error of a machine learning model in identifying species from camera traps is similar to the problem of imperfect detection of wildlife in field surveys (McIntyre, Majelantle, Slip, and Harcourt 2020). Wildlife are often not detected when they are present (false negatives) and occasionally detected when they are absent (false positives); ecologists have developed models to effectively estimate occupancy when data have these types of errors (Guillera-Arroita, Lahoz-Monfort, van Rooyen, Weeks, and Tingley 2017; Royle and Link 2006). We can use Bayesian occupancy and abundance models where the central tendencies of the prior distributions for the false negative and false positive error rates are derived from validation of the machine learning models. While we would expect false positive rates in occupancy models to resemble the false positive error rates of the machine learning model, false negative error rates would be a function of both the machine learning model and the propensity of some species to avoid detection by cameras when they are present (Tobler, Zúñiga Hartley, Carrillo-Percastegui, and Powell 2015).

7 Session information

## R version 4.0.2 (2020-06-22)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Catalina 10.15.7
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] fr_FR.UTF-8/fr_FR.UTF-8/fr_FR.UTF-8/C/fr_FR.UTF-8/fr_FR.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] unmarked_1.1.1         lattice_0.20-41        janitor_2.1.0         
##  [4] highcharter_0.8.2      fastai_2.0.9           ggtext_0.1.1          
##  [7] wesanderson_0.3.6.9000 kableExtra_1.3.4       stringi_1.7.3         
## [10] lubridate_1.7.10       cowplot_1.1.1          sf_0.9-7              
## [13] forcats_0.5.1          stringr_1.4.0          dplyr_1.0.7           
## [16] purrr_0.3.4            readr_2.0.0            tidyr_1.1.3           
## [19] tibble_3.1.3           ggplot2_3.3.5          tidyverse_1.3.0       
## 
## loaded via a namespace (and not attached):
##  [1] fs_1.5.0             xts_0.12.1           bit64_4.0.5         
##  [4] webshot_0.5.2        httr_1.4.2           tools_4.0.2         
##  [7] backports_1.2.1      bslib_0.2.4          utf8_1.2.2          
## [10] R6_2.5.0             KernSmooth_2.23-18   DBI_1.1.1           
## [13] colorspace_2.0-2     raster_3.4-13        sp_1.4-5            
## [16] withr_2.4.2          tidyselect_1.1.1     bit_4.0.4           
## [19] curl_4.3.2           compiler_4.0.2       cli_3.0.1           
## [22] rvest_1.0.0          xml2_1.3.2           sass_0.3.1.9001     
## [25] scales_1.1.1         classInt_0.4-3       proxy_0.4-26        
## [28] systemfonts_1.0.1    digest_0.6.27        rmarkdown_2.7       
## [31] svglite_2.0.0        pkgconfig_2.0.3      htmltools_0.5.1.9002
## [34] highr_0.9            dbplyr_2.1.0         fastmap_1.1.0       
## [37] htmlwidgets_1.5.3    rlang_0.4.11.9001    readxl_1.3.1        
## [40] TTR_0.24.2           rstudioapi_0.13      quantmod_0.4.18     
## [43] farver_2.1.0         jquerylib_0.1.3      generics_0.1.0      
## [46] zoo_1.8-8            jsonlite_1.7.2       vroom_1.5.4         
## [49] rlist_0.4.6.1        magrittr_2.0.1       Matrix_1.3-2        
## [52] Rcpp_1.0.7           munsell_0.5.0        fansi_0.5.0         
## [55] reticulate_1.20      lifecycle_1.0.0      yaml_2.2.1          
## [58] snakecase_0.11.0     MASS_7.3-53.1        plyr_1.8.6          
## [61] grid_4.0.2           parallel_4.0.2       crayon_1.4.1        
## [64] haven_2.4.3          gridtext_0.1.4       hms_1.1.0           
## [67] knitr_1.33           pillar_1.6.2         igraph_1.2.6        
## [70] codetools_0.2-18     reprex_1.0.0         glue_1.4.2          
## [73] evaluate_0.14        data.table_1.14.0    modelr_0.1.8        
## [76] vctrs_0.3.8          png_0.1-7            tzdb_0.1.2          
## [79] cellranger_1.1.0     gtable_0.3.0         assertthat_0.2.1    
## [82] xfun_0.25            broom_0.7.9          e1071_1.7-7         
## [85] class_7.3-18         viridisLite_0.4.0    units_0.7-2         
## [88] ellipsis_0.3.2

8 Acknowledgments

ANR. Folks who have labelled pictures, if not co-authors. MBB folks. Vincent Miele for his help along the way, and for being an inspiration. Add your own acknowledgments here.

References

Allaire, JJ, Kevin Ushey, Yuan Tang, and Dirk Eddelbuettel. 2017. Reticulate: R Interface to Python. https://github.com/rstudio/reticulate.

Baraniuk, Richard, David Donoho, and Matan Gavish. 2020. “The Science of Deep Learning.” Proceedings of the National Academy of Sciences 117 (48): 30029–32. https://doi.org/10.1073/pnas.2020596117.

Beery, Sara, Grant van Horn, and Pietro Perona. 2018. “Recognition in Terra Incognita.” arXiv:1807.04975 [Cs, Q-Bio], July. http://arxiv.org/abs/1807.04975.

Breitenmoser, Urs. 1998. “Large Predators in the Alps: The Fall and Rise of Man’s Competitors.” Biological Conservation, Conservation Biology and Biodiversity Strategies, 83 (3): 279–89. https://doi.org/10.1016/S0006-3207(97)00084-0.

Christin, Sylvain, Éric Hervet, and Nicolas Lecomte. 2019. “Applications for Deep Learning in Ecology.” Edited by Hao Ye. Methods in Ecology and Evolution 10 (10): 1632–44. https://doi.org/10.1111/2041-210X.13256.

Dai, Bin, Shilin Ding, and Grace Wahba. 2013. “Multivariate Bernoulli Distribution.” Bernoulli 19 (4). https://doi.org/10.3150/12-BEJSP10.

Duggan, Matthew T., Melissa F. Groleau, Ethan P. Shealy, Lillian S. Self, Taylor E. Utter, Matthew M. Waller, Bryan C. Hall, Chris G. Stone, Layne L. Anderson, and Timothy A. Mousseau. 2021. “An Approach to Rapid Processing of Camera Trap Images with Minimal Human Input.” Ecology and Evolution. https://doi.org/https://doi.org/10.1002/ece3.7970.

Fiske, Ian, and Richard Chandler. 2011. “unmarked: An R Package for Fitting Hierarchical Models of Wildlife Occurrence and Abundance.” Journal of Statistical Software 43 (10): 1–23. https://www.jstatsoft.org/v43/i10/.

Gimenez, Olivier, Stephen T. Buckland, Byron J. T. Morgan, Nicolas Bez, Sophie Bertrand, Rémi Choquet, Stéphane Dray, et al. 2014. “Statistical Ecology Comes of Age.” Biology Letters 10 (12): 20140698. https://doi.org/10.1098/rsbl.2014.0698.

He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. “Deep Residual Learning for Image Recognition.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. https://doi.org/10.1109/CVPR.2016.90.

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc.

Lahoz-Monfort, José J, and Michael J L Magrath. 2021. “A Comprehensive Overview of Technologies for Species and Habitat Monitoring and Conservation.” BioScience. https://doi.org/10.1093/biosci/biab073.

Lai, Jiangshan, Christopher J. Lortie, Robert A. Muenchen, Jian Yang, and Keping Ma. 2019. “Evaluating the Popularity of R in Ecology.” Ecosphere 10 (1). https://doi.org/10.1002/ecs2.2567.

Lamba, Aakash, Phillip Cassey, Ramesh Raja Segaran, and Lian Pin Koh. 2019. “Deep Learning for Environmental Conservation.” Current Biology 29 (19): R977–R982. https://doi.org/10.1016/j.cub.2019.08.016.

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. 2015. “Deep Learning.” Nature 521 (7553): 436–44. https://doi.org/10.1038/nature14539.

Miele, Vincent, Stéphane Dray, and Olivier O. Gimenez. 2021. “Images, écologie et deep learning.” Regards sur la biodiversité, February. https://hal.archives-ouvertes.fr/hal-03142486.

Miele, Vincent, Gaspard Dussert, Bruno Spataro, Simon Chamaillé‐Jammes, Dominique Allainé, and Christophe Bonenfant. 2021. “Revisiting Animal Photo‐identification Using Deep Metric Learning and Network Analysis.” Edited by Robert Freckleton. Methods in Ecology and Evolution 12 (5): 863–73. https://doi.org/10.1111/2041-210X.13577.

Molinari-Jobin, Anja, Fridolin Zimmermann, Andreas Ryser, Christine Breitenmoser-Würsten, Simon Capt, Urs Breitenmoser, Paolo Molinari, Heinrich Haller, and Roman Eyholzer. 2007. “Variation in Diet, Prey Selectivity and Home-Range Size of Eurasian Lynx Lynx Lynx in Switzerland.” Wildlife Biology 13 (4): 393–405. https://doi.org/10.2981/0909-6396(2007)13[393:VIDPSA]2.0.CO;2.

Norouzzadeh, Mohammad Sadegh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, and Jeff Clune. 2021. “A Deep Active Learning System for Species Identification and Counting in Camera Trap Images.” Edited by Matthew Schofield. Methods in Ecology and Evolution 12 (1): 150–61. https://doi.org/10.1111/2041-210X.13504.

Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. 2019. “PyTorch: An Imperative Style, High-Performance Deep Learning Library.” In Advances in Neural Information Processing Systems 32, edited by H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlché-Buc, E. Fox, and R. Garnett, 8024–35. Curran Associates, Inc. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf.

Rota, Christopher T., Marco A. R. Ferreira, Roland W. Kays, Tavis D. Forrester, Elizabeth L. Kalies, William J. McShea, Arielle W. Parsons, and Joshua J. Millspaugh. 2016. “A Multispecies Occupancy Model for Two or More Interacting Species.” Methods in Ecology and Evolution 7 (10): 1164–73. https://doi.org/https://doi.org/10.1111/2041-210X.12587.

Shao, Ling, Fan Zhu, and Xuelong Li. 2015. “Transfer Learning for Visual Categorization: A Survey.” IEEE Transactions on Neural Networks and Learning Systems 26 (5): 1019–34. https://doi.org/10.1109/TNNLS.2014.2330900.

Shorten, Connor, and Taghi M. Khoshgoftaar. 2019. “A Survey on Image Data Augmentation for Deep Learning.” Journal of Big Data 6 (1): 60. https://doi.org/10.1186/s40537-019-0197-0.

Sutherland, William J., Robert P. Freckleton, H. Charles J. Godfray, Steven R. Beissinger, Tim Benton, Duncan D. Cameron, Yohay Carmel, et al. 2013. “Identification of 100 Fundamental Ecological Questions.” Edited by David Gibson. Journal of Ecology 101 (1): 58–67. https://doi.org/10.1111/1365-2745.12025.

Tabak, Michael A., Mohammad S. Norouzzadeh, David W. Wolfson, Erica J. Newton, Raoul K. Boughton, Jacob S. Ivan, Eric A. Odell, et al. 2020. “Improving the Accessibility and Transferability of Machine Learning Algorithms for Identification of Animals in Camera Trap Images: MLWIC2.” Ecology and Evolution 10 (19): 10374–83. https://doi.org/10.1002/ece3.6692.

Tabak, Michael A., Mohammad S. Norouzzadeh, David W. Wolfson, Steven J. Sweeney, Kurt C. Vercauteren, Nathan P. Snow, Joseph M. Halseth, et al. 2019. “Machine Learning to Classify Animal Species in Camera Trap Images: Applications in Ecology.” Edited by Theoni Photopoulou. Methods in Ecology and Evolution 10 (4): 585–90. https://doi.org/10.1111/2041-210X.13120.

Vandel, Jean-Michel, and Philippe Stahl. 2005. “Distribution Trend of the Eurasian Lynx Lynx Lynx Populations in France.” Mammalia 69 (2). https://doi.org/10.1515/mamm.2005.013.

Voulodimos, Athanasios, Nikolaos Doulamis, Anastasios Doulamis, and Eftychios Protopapadakis. 2018. “Deep Learning for Computer Vision: A Brief Review.” Edited by Diego Andina. Computational Intelligence and Neuroscience 2018 (February): 7068349. https://doi.org/10.1155/2018/7068349.

Wearn, Oliver R., Robin Freeman, and David M. P. Jacoby. 2019. “Responsible AI for Conservation.” Nature Machine Intelligence 1 (2): 72–73. https://doi.org/10.1038/s42256-019-0022-7.

Weinstein, Ben G. 2018. “A Computer Vision for Animal Ecology.” Edited by Laura Prugh. Journal of Animal Ecology 87 (3): 533–45. https://doi.org/10.1111/1365-2656.12780.

Willi, Marco, Ross T. Pitman, Anabelle W. Cardoso, Christina Locke, Alexandra Swanson, Amy Boyer, Marten Veldthuis, and Lucy Fortson. 2019. “Identifying Animal Species in Camera Trap Images Using Deep Learning and Citizen Science.” Edited by Oscar Gaggiotti. Methods in Ecology and Evolution 10 (1): 80–91. https://doi.org/10.1111/2041-210X.13099.

Yosinski, Jason, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. “How Transferable Are Features in Deep Neural Networks?” In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, 3320–8. NIPS’14. Cambridge, MA, USA: MIT Press.

Zimmermann, Fridolin, Christine Breitenmoser-Würsten, Anja Molinari-Jobin, and Urs Breitenmoser. 2013. “Optimizing the Size of the Area Surveyed for Monitoring a Eurasian Lynx (Lynx Lynx) Population in the Swiss Alps by Means of Photographic Capture-Recapture.” Integrative Zoology 8 (3): 232–43. https://doi.org/10.1111/1749-4877.12017.